

    Aprotinin may increase mortality in low and intermediate risk but not in high risk cardiac surgical patients compared to tranexamic acid and ε-aminocaproic acid - a meta-analysis of randomised and observational trials of over 30,000 patients

    Background: To compare the effect of aprotinin with that of the lysine analogues (tranexamic acid and ε-aminocaproic acid) on early mortality in three subgroups of cardiac surgical patients: low, intermediate and high risk. Methods and Findings: We performed a meta-analysis of randomised controlled trials and observational studies using the following data sources: Medline, Cochrane Library, and reference lists of identified articles. The primary outcome measure was early (in-hospital/30-day) mortality. The secondary outcome measures were any transfusion of packed red blood cells within 24 hours after surgery, any re-operation for bleeding or massive bleeding, and acute renal dysfunction or failure, as reported in the selected publications. Out of 328 search results, 31 studies (15 trials and 16 observational studies) comprising 33,501 patients were included. Early mortality was significantly increased after aprotinin vs. lysine analogues, with pooled risk ratios (95% CI) of 1.58 (1.13–2.21), p<0.001, in the low risk subgroup (n = 14,297) and 1.42 (1.09–1.84), p<0.001, in the intermediate risk subgroup (n = 14,427), respectively. In contrast, in the subgroup of high risk patients (n = 4,777), the risk of mortality did not differ significantly between aprotinin and lysine analogues (1.03 (0.67–1.58), p = 0.90). Conclusion: Aprotinin may be associated with an increased risk of mortality in low and intermediate risk cardiac surgery, but presumably has no effect on early mortality in high risk cardiac surgery compared to lysine analogues. Thus, decisions to re-license aprotinin for lower risk patients should be critically debated. In contrast, aprotinin might be beneficial in high risk cardiac surgery, as it reduces the risk of transfusion and bleeding complications.
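    For readers unfamiliar with the pooled effect measure reported above, the following is a minimal sketch, not the authors' actual analysis, of how a fixed-effect pooled risk ratio and its 95% CI can be computed with inverse-variance weighting on the log scale; the per-study counts are invented purely for illustration.

```python
import math

# Hypothetical per-study 2x2 counts: (deaths_aprotinin, n_aprotinin, deaths_lysine, n_lysine)
studies = [
    (12, 400, 8, 410),
    (30, 1500, 20, 1480),
    (5, 250, 4, 260),
]

log_rrs, weights = [], []
for d1, n1, d0, n0 in studies:
    rr = (d1 / n1) / (d0 / n0)                    # per-study risk ratio
    se = math.sqrt(1/d1 - 1/n1 + 1/d0 - 1/n0)     # standard error of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1 / se**2)                      # inverse-variance weight

pooled_log_rr = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
ci_low = math.exp(pooled_log_rr - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR = {pooled_rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```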

    A versatile design for resonant guided-wave parametric down-conversion sources for quantum repeaters

    Quantum repeaters - fundamental building blocks for long-distance quantum communication - are based on the interaction between photons and quantum memories. The photons must fulfil stringent requirements on central frequency, spectral bandwidth and purity in order for this interaction to be efficient. We present a design scheme for monolithically integrated resonant photon-pair sources based on parametric down-conversion in nonlinear waveguides, which facilitate the generation of such photons. We investigate the impact of different design parameters on the performance of our source. The generated photon spectral bandwidths can be varied from several tens of MHz up to around 1 GHz, facilitating efficient coupling to different memories. The central frequency of the generated photons can be coarsely tuned by adjusting the pump frequency, poling period and sample temperature, and we identify stability requirements on the pump laser and sample temperature that can be readily fulfilled with off-the-shelf components. We find that our source is capable of generating high-purity photons over a wide range of photon bandwidths. Finally, the PDC emission can be frequency fine-tuned over several GHz by simultaneously adjusting the sample temperature and pump frequency. We conclude our study by demonstrating the adaptability of our source to different quantum memories.
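    As a back-of-the-envelope illustration of why a resonant (cavity-enhanced) source can reach the MHz-scale bandwidths quoted above, the sketch below uses the standard relations FSR = c/(2·n·L) and linewidth ≈ FSR/finesse. The refractive index, cavity length and end-face reflectivities are illustrative values only, not the design parameters of the paper.

```python
import math

c = 299_792_458.0  # speed of light (m/s)

def cavity_linewidth(length_m, n_eff, r1, r2):
    """Free spectral range and linewidth (FWHM) of a linear waveguide cavity."""
    fsr = c / (2 * n_eff * length_m)               # free spectral range (Hz)
    r = math.sqrt(r1 * r2)
    finesse = math.pi * math.sqrt(r) / (1 - r)     # finesse from mirror reflectivities
    return fsr, fsr / finesse                      # linewidth = FSR / finesse

# Illustrative numbers: 2 cm waveguide, n_eff = 2.2, 99% / 90% end-face reflectivities
fsr, dv = cavity_linewidth(0.02, 2.2, 0.99, 0.90)
print(f"FSR = {fsr/1e9:.2f} GHz, photon bandwidth ~ {dv/1e6:.0f} MHz")
```

    With these assumed numbers the linewidth comes out at roughly 60 MHz, i.e. in the "tens of MHz" regime mentioned in the abstract; longer cavities or higher reflectivities narrow it further.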

    Online horizontal partitioning of heterogeneous data

    In an increasing number of use cases, databases face the challenge of managing heterogeneous data. Heterogeneous data is characterized by a quickly evolving variety of entities without a common set of attributes. These entities do not show enough regularity to be captured in a traditional database schema. A common solution is to centralize the diverse entities in a universal table. Usually, this leads to a very sparse table. Although today's techniques allow efficient storage of sparse universal tables, query efficiency is still a problem. Queries that address only a subset of attributes have to read the whole universal table, including many irrelevant entities. A solution is to use a partitioning of the table, which allows pruning partitions of irrelevant entities before they are touched. Creating and maintaining such a partitioning manually is very laborious or even infeasible due to the enormous complexity, so an autonomous solution is desirable. In this article, we define the Online Partitioning Problem for heterogeneous data. We sketch how an optimal solution for this problem can be determined based on hypergraph partitioning. Although it leads to the optimal partitioning, the hypergraph approach is inappropriate for an implementation in a database system. We present Cinderella, an autonomous online algorithm for horizontal partitioning of heterogeneous entities in universal tables. Cinderella is designed to keep its overhead low by operating online; it incrementally assigns entities to partitions while they are touched anyway during modifications. This enables a reasonable physical database design at runtime instead of static modeling.
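    The abstract does not spell out Cinderella's algorithm, so the following is only a rough, hypothetical sketch of the general idea of online horizontal partitioning by attribute similarity: each incoming entity (a dict of attributes) is assigned to the existing partition whose attribute set overlaps most, and a new partition is opened when the overlap is too low. The class name, the Jaccard score and the threshold are invented for illustration.

```python
from typing import Dict, List

class OnlinePartitioner:
    """Toy online horizontal partitioner: groups entities with similar attribute sets."""

    def __init__(self, min_overlap: float = 0.5):
        self.min_overlap = min_overlap
        self.partitions: List[Dict[str, object]] = []   # each: {"attrs": set, "rows": list}

    def assign(self, entity: Dict[str, object]) -> int:
        attrs = set(entity)
        best_idx, best_score = -1, 0.0
        for i, part in enumerate(self.partitions):
            overlap = len(attrs & part["attrs"]) / len(attrs | part["attrs"])  # Jaccard similarity
            if overlap > best_score:
                best_idx, best_score = i, overlap
        if best_idx == -1 or best_score < self.min_overlap:
            self.partitions.append({"attrs": set(attrs), "rows": [entity]})
            return len(self.partitions) - 1
        part = self.partitions[best_idx]
        part["attrs"] |= attrs        # the partition's schema grows with its entities
        part["rows"].append(entity)
        return best_idx

p = OnlinePartitioner()
print(p.assign({"id": 1, "title": "a", "isbn": "x"}))              # -> 0 (first partition)
print(p.assign({"id": 2, "title": "b", "pages": 100}))             # -> 0 (similar attributes)
print(p.assign({"id": 3, "url": "http://example.org", "mime": "pdf"}))  # -> 1 (new partition)
```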

    Design Reiteration of a Chimney Gas Flowmeter for Natural CO2 Emissions from Mofettes: Differential Pressure Measurement Increases Resolution and Accuracy

    In this thesis, an established device for in situ gas measurements of the natural CO2 emissions from a mofette is improved in design and measurement principle. A cyclically erupting mofette that is continuously submerged under the surrounding water table is observed. For the measurement of the volumetric flow rate, the previously utilized cup anemometer is discarded and a self-made, self-calibrated differential pressure flowmeter is introduced instead. During field operation, it is validated as an accurate and high-resolution approach to flow rate quantification that is well adapted to the conditions of the mofette, thus presenting a considerable improvement over the previous installment. The device's gas-channelling and sensor-carrying chimney has been completely redesigned based on a highly modular approach. This change makes field maintenance much more efficient and considerably improves the device's flexibility. Measurements on 9 August 2023 verified the system's ability to perform under field conditions and led to the observation of an anomaly in which the flow rate increased by ∼40 % over the course of 1 minute, coupled with a temperature rise of 0.8 K. Similarities and differences between this anomaly and anomalies detected with previous iterations of the device are discussed. High-resolution pressure data is obtained, which leads to a temporal quantification of the exhaust dynamics of the mofette on a small time scale. Compared to previous measurements conducted in winter 2022, the main frequency window in which the mofette erupts has shifted from ∼4 seconds to ∼3 seconds. Furthermore, the flow rate has increased by ∼36–41 % during calibration experiments and by ∼14–24 % during operation. These observations suggest a seasonal dependency of the mofette's exhaust. It is discussed that this dependency may be caused by an increase in evaporation coupled with a decrease in precipitation during the summer, which induces a drop in water table height. As a consequence, the hydrostatic pressure the upstreaming gas needs to overcome to reach the atmosphere decreases, resulting in a rise in eruption frequency and consequently an increase in flow rate.
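    As background on the measurement principle mentioned above, a differential pressure flowmeter infers the volumetric flow rate from the pressure drop across a constriction via Bernoulli's equation. Below is a minimal sketch using the standard incompressible orifice equation with illustrative geometry and an assumed discharge coefficient; these are not the thesis' calibration values.

```python
import math

def orifice_flow_rate(dp_pa: float, d_pipe_m: float, d_orifice_m: float,
                      rho: float, cd: float = 0.62) -> float:
    """Volumetric flow rate (m^3/s) from a differential pressure dp across an orifice.

    Standard incompressible orifice equation:
        Q = Cd * A2 * sqrt(2 * dp / (rho * (1 - beta**4)))
    """
    beta = d_orifice_m / d_pipe_m                 # diameter ratio
    a2 = math.pi * (d_orifice_m / 2) ** 2         # orifice cross-section area
    return cd * a2 * math.sqrt(2 * dp_pa / (rho * (1 - beta ** 4)))

# Illustrative values only: 50 Pa pressure drop, 10 cm chimney, 6 cm constriction,
# CO2 density of ~1.8 kg/m^3 near surface conditions
q = orifice_flow_rate(50.0, 0.10, 0.06, 1.8)
print(f"Q ~ {q * 1000:.1f} L/s")
```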

    Multi-Schema-Version Data Management

    Modern agile software development methods allow software systems to be continuously evolved by easily adding new features, fixing bugs, and adapting the software to changing requirements and conditions while it is continuously used by its users. A major obstacle to agile evolution is the underlying database that persists the software system's data from day one on. Hence, evolving the database schema requires evolving the existing data accordingly; at this point, the currently established solutions are expensive, error-prone, and far from agile.

    In this thesis, we present InVerDa, a multi-schema-version database system that facilitates agile database development. Multi-schema-version database systems provide multiple schema versions within the same database, where each schema version itself behaves like a regular single-schema database. Creating new schema versions is very simple, which provides the desired agility for database development. All created schema versions co-exist, and write operations are immediately propagated between schema versions with a best-effort strategy. Developers do not have to implement the propagation logic of data accesses between schema versions by hand; InVerDa generates it automatically.

    To facilitate multi-schema-version database systems, we equip developers with a relationally complete and bidirectional database evolution language (BiDEL) that allows existing schema versions to be easily evolved into new ones. BiDEL expresses the evolution of both the schema and the data, both forwards and backwards, in intuitive and consistent operations; BiDEL evolution scripts are orders of magnitude shorter than implementations of the same behavior in standard SQL and are also less likely to be erroneous, since they describe a developer's intention of the evolution exclusively on the level of tables, without further technical details. Having the developers' intentions explicitly given in the BiDEL scripts further allows a new schema version to be created by merging already existing ones.

    Having multiple co-existing schema versions in one database raises the need for a sophisticated physical materialization. Multi-schema-version database systems provide full data independence, hence the database administrator can choose any feasible materialization, while the multi-schema-version database system internally ensures that no data is lost. The search space of possible materializations can grow exponentially with the number of schema versions. Therefore, we present an adviser that releases the database administrator from diving into the complex performance characteristics of multi-schema-version database systems and merely proposes an optimized materialization for a given workload within seconds. Optimized materializations have been shown to improve the performance for a given workload by orders of magnitude.

    Finally, we formally guarantee data independence for multi-schema-version database systems. To this end, we show that every single schema version behaves like a regular single-schema database, independent of the chosen physical materialization. This important guarantee allows the database to be easily evolved and accessed in agile software development, while all the important features of relational databases, such as transaction guarantees, are preserved. To the best of our knowledge, we are the first to realize such a multi-schema-version database system that allows agile evolution of production databases with full support for co-existing schema versions and formally guaranteed data independence.
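    The BiDEL language itself is not shown in this abstract, so the example below is only a hypothetical Python sketch of the core idea of co-existing schema versions: one evolution step (here, splitting a "name" column into "first"/"last") is described once with both a forward and a backward mapping, so writes issued against either schema version become visible in the other. InVerDa generates such propagation logic automatically; this is not its actual API.

```python
from typing import Callable, Dict, List

Row = Dict[str, str]

class SchemaVersionPair:
    """Two co-existing schema versions linked by bidirectional row mappings."""

    def __init__(self, forward: Callable[[Row], Row], backward: Callable[[Row], Row]):
        self.forward, self.backward = forward, backward
        self.v1: List[Row] = []   # rows as seen by the old schema version
        self.v2: List[Row] = []   # rows as seen by the new schema version

    def insert_v1(self, row: Row) -> None:
        self.v1.append(row)
        self.v2.append(self.forward(row))    # propagate the write forwards

    def insert_v2(self, row: Row) -> None:
        self.v2.append(row)
        self.v1.append(self.backward(row))   # propagate the write backwards

# Evolution step: split person.name into first/last, and merge back for the old version
split = lambda r: {"first": r["name"].split()[0], "last": r["name"].split()[-1]}
merge = lambda r: {"name": f"{r['first']} {r['last']}"}

db = SchemaVersionPair(split, merge)
db.insert_v1({"name": "Ada Lovelace"})              # written via the old schema version
db.insert_v2({"first": "Alan", "last": "Turing"})   # written via the new schema version
print(db.v1)  # both rows visible in the old schema version
print(db.v2)  # both rows visible in the new schema version
```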